AI in schools creates greater risk for marginalized students, researchers find
Sep 22, 2023

Elizabeth Laird of the Center for Democracy & Technology recounts artificial intelligence’s chaotic introduction into education. Students with disabilities in particular are using AI more, she says, and getting in trouble more often.

When ChatGPT came on the scene in November 2022, it sent schools across the country into a panic. Some districts immediately started setting rules around how students could use artificial intelligence programs in their schoolwork. Others moved to ban them altogether. All this happened while information about the good and the bad of AI’s foray into classrooms was still scarce.

Researchers at the Center for Democracy & Technology, based in Washington, D.C., gathered data to counter some of the hype. Marketplace’s Lily Jamali discussed it with Elizabeth Laird, CDT’s director of equity in civic technology and a co-author of a report out this week. The following is an edited transcript of their conversation.

Elizabeth Laird: Fifty-eight percent of students say that they have used generative AI either for academic purposes or for personal reasons.

Lily Jamali: That’s a lot.

Laird: It is, although it is actually lower than what teachers think is happening. Only 19% of students say that they have used generative AI to write and submit papers. If you ask teachers the same question, 60% of teachers think that students have done that. And so what teachers think is happening is partly driven by the hype around generative AI and may not actually be what students are doing.

Elizabeth Laird (Courtesy John Will)

Jamali: So what are the risks when there’s such a mismatch between what students are doing and what teachers think they’re doing?

Laird: This technology introduces mistrust and creates an adversarial relationship between teachers and students. It came onto the scene in the middle of a school year, and so we found that its implementation was quite chaotic. Only a quarter of teachers have ever received guidance from their school about how to respond if they suspect a student might be using this technology in ways that aren’t allowed. And the last thing that I’ll call out is that students with disabilities are far more likely to use generative AI, and they are also more likely to get in trouble for it. Anytime you see a group of students who have been historically marginalized getting in trouble more, that should raise the alarm that we need to look at our policies and practices so that we’re not disproportionately hurting protected classes of students.

Jamali: What do you think schools and school districts could be doing to manage all these changes that are being brought on by AI?

Laird: We found that both teachers who work at Title I schools, which are schools that typically serve lower-income communities, and licensed special education teachers are more likely to know of students who have gotten in trouble due to generative AI. They’re also more likely to say that their school filters content based on whether students are LGBTQ+ or students of color. In many cases, especially with new technology, we find ourselves talking about what new protections, regulations and laws we need. In this case, we can actually start with existing protections, look at the ways AI may be deployed that could run afoul of existing civil rights protections, and then give guidance to schools and the companies that work with them about how to avoid that.

More on this

Laird’s report also highlights how schools are monitoring digital activity, which is not limited to devices issued by schools; it’s also happening on personal devices that students bring with them. This can happen when a device is connected to a school network, when it is being charged through a school device, or when a student happens to be logged into a school account on their personal device. One parent noted this monitoring has real-life consequences, including a visit from law enforcement after their child voiced an opinion on a site that wasn’t blocked.

Here’s another interview with Laird from 2021, when student activity monitoring was still pretty new. It was the height of the COVID-19 pandemic, when many schools were trying to bridge the digital divide by providing devices like laptops and tablets to students.


The team

Daisy Palacios, Senior Producer
Daniel Shin, Producer
Jesús Alvarado, Associate Producer